| Name | Email |
|---|---|
| Mahdi Asadolahzade | mahdiasadi140@gmail.com |
In this project, we explore the diverse applications of neural networks across various stages of data analysis, ranging from preprocessing and model training to noise analysis and denoising techniques. The goal is to leverage neural networks to address real-world data challenges and derive meaningful insights through structured experimentation.
In this phase, we will build a multi-layer perceptron (MLP) neural network to approximate several functions ranging from simple (like a linear equation) to complex (like a trigonometric function) within a specified domain. We will generate data points from these functions and use a portion of these points as our training set.
First, we import the libraries we need.
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score
import seaborn as sns
from PIL import Image
import pandas as pd
from sklearn.metrics import mean_squared_error, r2_score
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
Define a set of functions with one-dimensional input (x), ranging from simple to complex, and generate data points from these functions within a specified domain.
def linear_function(x):
    return 2 * x + 3

def sinusoidal_function(x):
    return np.sin(x)

def complicated_function(x):
    return np.log(x**2 + 1)
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
y_train_linear = linear_function(x_train)
plt.figure(figsize=(8, 6))
plt.scatter(x_train, y_train_linear, color='blue', label='Data Points')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sampled Data (Linear Function)')
plt.legend()
plt.grid(True)
plt.show()
y_train_sinusoidal = sinusoidal_function(x_train)
plt.figure(figsize=(8, 6))
plt.scatter(x_train, y_train_sinusoidal, color='blue', label='Data Points')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sampled Data (Sinusoidal Function)')
plt.legend()
plt.grid(True)
plt.show()
y_train_complicated = complicated_function(x_train)
plt.figure(figsize=(8, 6))
plt.scatter(x_train, y_train_complicated, color='blue', label='Data Points')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sampled Data (Complicated Function)')
plt.legend()
plt.grid(True)
plt.show()
Randomly select a portion of the generated data points as the training set.
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
x_train, x_test, y_train, y_test = train_test_split(x_train, y_train_linear, test_size=0.2, random_state=42)
Construct a multi-layer perceptron (MLP) model using TensorFlow/Keras and train it using the generated training data.
model = keras.Sequential([
    keras.Input(shape=(1,)),  # preferred over passing input_shape to the first Dense layer
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))
Epoch 1/100 3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 66ms/step - loss: 39.4679 - mae: 5.3581 - val_loss: 31.3681 - val_mae: 4.7188
...
Epoch 100/100 3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 13ms/step - loss: 0.0131 - mae: 0.0878 - val_loss: 0.0145 - val_mae: 0.1016
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
x_range = np.linspace(-5, 5, 100)
y_actual = linear_function(x_range)
y_predicted = model.predict(x_range)
plt.figure(figsize=(8, 6))
plt.plot(x_range, y_actual, label='Actual Function', color='blue')
plt.plot(x_range, y_predicted, label='Predicted Function', color='red', linestyle='--')
plt.scatter(x_train, y_train, color='blue', label='Data Points')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Actual vs. Predicted Function')
plt.legend()
plt.grid(True)
plt.show()
y_predicted = np.squeeze(y_predicted)
mae = np.mean(np.abs(y_actual - y_predicted))
print(f'Mean Absolute Error (MAE): {mae}')
Mean Absolute Error (MAE): 0.09004845462287918
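Since `mean_squared_error` and `r2_score` are already imported from scikit-learn, we can also report R² and MSE on the same evaluation grid as a sanity check — a minimal sketch, reusing `y_actual` and `y_predicted` from the cell above:
r2 = r2_score(y_actual, y_predicted)  # coefficient of determination
mse = mean_squared_error(y_actual, y_predicted)  # should track the Keras MSE loss
print(f'R^2: {r2:.4f}, MSE: {mse:.4f}')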
x_test = np.linspace(0.1, 5, 50)
y_test = linear_function(x_test)
test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.018191348761320114
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
x_train, x_test, y_train, y_test = train_test_split(x_train, y_train_sinusoidal, test_size=0.2, random_state=42)
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))
Epoch 1/100 3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 79ms/step - loss: 0.4726 - mae: 0.6080 - val_loss: 0.6465 - val_mae: 0.7403
...
Epoch 100/100 3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 12ms/step - loss: 0.0100 - mae: 0.0743 - val_loss: 0.0042 - val_mae: 0.0418
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
x_range = np.linspace(-5, 5, 1000)
y_actual = sinusoidal_function(x_range)
y_predicted = model.predict(x_range)
plt.figure(figsize=(8, 6))
plt.plot(x_range, y_actual, label='Actual Function', color='blue')
plt.plot(x_range, y_predicted, label='Predicted Function', color='red', linestyle='--')
plt.scatter(x_train, y_train, color='blue', label='Data Points')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Actual vs. Predicted Function')
plt.legend()
plt.grid(True)
plt.show()
y_predicted = np.squeeze(y_predicted)
mae = np.mean(np.abs(y_actual - y_predicted))
print(f'Mean Absolute Error (MAE): {mae}')
Mean Absolute Error (MAE): 0.06445569229047167
x_test = np.linspace(0.1, 5, 50)
y_test = sinusoidal_function(x_test)
test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.009555906057357788
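As a hedged aside, the approximation only holds inside the sampled domain; evaluating the same trained model on a wider range (the bounds here are illustrative) shows how the MLP extrapolates poorly outside [-5, 5]:
x_wide = np.linspace(-10, 10, 400)
plt.figure(figsize=(8, 6))
plt.plot(x_wide, sinusoidal_function(x_wide), label='Actual Function', color='blue')
plt.plot(x_wide, model.predict(x_wide), label='Predicted Function', color='red', linestyle='--')
plt.axvspan(-5, 5, color='gray', alpha=0.2, label='Training Domain')
plt.legend()
plt.grid(True)
plt.show()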
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
x_train, x_test, y_train, y_test = train_test_split(x_train, y_train_complicated, test_size=0.2, random_state=42)
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))
Epoch 1/100 3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 69ms/step - loss: 4.9959 - mae: 1.9220 - val_loss: 2.4807 - val_mae: 1.3455
...
Epoch 100/100 3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 15ms/step - loss: 0.0086 - mae: 0.0736 - val_loss: 0.0084 - val_mae: 0.0786
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
x_range = np.linspace(-5, 5, 100)
y_actual = complicated_function(x_range)
y_predicted = model.predict(x_range)
plt.figure(figsize=(8, 6))
plt.plot(x_range, y_actual, label='Actual Function', color='blue')
plt.plot(x_range, y_predicted, label='Predicted Function', color='red', linestyle='--')
plt.scatter(x_train, y_train, color='blue', label='Data Points')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Actual vs. Predicted Function')
plt.legend()
plt.grid(True)
plt.show()
y_predicted = np.squeeze(y_predicted)
mae = np.mean(np.abs(y_actual - y_predicted))
print(f'Mean Absolute Error (MAE): {mae}')
Mean Absolute Error (MAE): 0.08308614802300479
x_test = np.linspace(0.1, 5, 50)
y_test = complicated_function(x_test)
test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.009957339614629745
# Define an additional sample function (cosine); the loop below still
# varies N using sinusoidal_function from earlier
def cosinusoidal_function(x):
    return np.cos(x)
# Generate different numbers of input points
num_points_list = [50, 100, 200]

for num_points in num_points_list:
    # Generate training data
    np.random.seed(0)
    x_train = np.random.uniform(-5, 5, num_points)
    y_train = sinusoidal_function(x_train)

    # Build and train the model
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    history = model.fit(x_train, y_train, epochs=100, verbose=0)

    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = sinusoidal_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function (N={num_points})', color='red', linestyle='--')
    plt.scatter(x_train, y_train, color='blue', label='Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Number of Input Points (N={num_points})')
    plt.legend()
    plt.grid(True)
    plt.show()

    x_test = np.linspace(0.1, 5, 50)
    y_test = sinusoidal_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.010831722989678383
Test Loss (MSE): 0.0094144307076931
Test Loss (MSE): 0.003951942548155785
Number of Input Points: as we increase the number of input points (N), the model tends to fit the data more closely, capturing the underlying function more accurately.
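To make this trend explicit, one can collect the per-N test losses and plot them in a single figure — a minimal sketch that repeats the loop above but only aggregates the results (the names n_values and test_losses are illustrative):
n_values = [50, 100, 200]
test_losses = []
for n in n_values:
    np.random.seed(0)
    x = np.random.uniform(-5, 5, n)
    y = sinusoidal_function(x)
    m = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)
    ])
    m.compile(optimizer='adam', loss='mse')  # evaluate() then returns the scalar MSE
    m.fit(x, y, epochs=100, verbose=0)
    x_t = np.linspace(0.1, 5, 50)
    test_losses.append(m.evaluate(x_t, sinusoidal_function(x_t), verbose=0))

plt.plot(n_values, test_losses, marker='o')
plt.xlabel('Number of training points (N)')
plt.ylabel('Test MSE')
plt.title('Test MSE vs. Training-Set Size')
plt.grid(True)
plt.show()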
# Define simple and complex functions
def new_linear_function(x):
    return 4 * x + 3

def new_sinusoidal_function(x):
    return np.sin(x + 2)

def new_complicated_function(x):
    # Note: log(x**2 + 2x + 1) = 2*log|x + 1|, which diverges at x = -1
    return np.log(x**2 + 2*x + 1)

# Generate training data for different functions
functions = [new_linear_function, new_sinusoidal_function, new_complicated_function]

for func in functions:
    # Generate training data
    np.random.seed(0)
    x_train = np.random.uniform(-5, 5, 100)
    y_train = func(x_train)

    # Build and train the model
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    history = model.fit(x_train, y_train, epochs=100, verbose=0)

    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = func(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function ({func.__name__})', color='red', linestyle='--')
    plt.scatter(x_train, y_train, color='blue', label='Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Function Complexity ({func.__name__})')
    plt.legend()
    plt.grid(True)
    plt.show()

    x_test = np.linspace(0.1, 5, 50)
    y_test = func(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.022746514528989792
Test Loss (MSE): 0.07341403514146805
Test Loss (MSE): 0.013267316855490208
Complexity of Target Function: More complex functions (e.g., complicated_function) may require deeper or differently structured networks to accurately capture their behavior.
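One hedged variation for smooth targets is to swap ReLU for a smooth activation such as tanh, which produces smooth interpolants instead of the piecewise-linear fits that ReLU gives; this sketch changes only the activation in the architecture used above:
model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(64, activation='tanh'),
    keras.layers.Dense(64, activation='tanh'),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
# Train and evaluate exactly as in the loop above.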
# Generate training data once, outside train_model, so the scatter plot
# below shows the actual points the model was trained on
np.random.seed(0)
x_train = np.random.uniform(-5, 5, 100)
y_train = linear_function(x_train)

# Define a function to create and train models with different architectures
def train_model(num_layers, num_neurons):
    # Build and train the model
    model = keras.Sequential([keras.Input(shape=(1,))])
    for _ in range(num_layers):
        model.add(keras.layers.Dense(num_neurons, activation='relu'))
    model.add(keras.layers.Dense(1))
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    model.fit(x_train, y_train, epochs=100, verbose=0)
    return model

# Test different architectures
architectures = [(1, 64), (2, 32), (3, 20), (3, 16), (4, 8)]

for layers, neurons in architectures:
    model = train_model(layers, neurons)

    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = linear_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function ({layers} layers, {neurons} neurons)', color='red', linestyle='--')
    plt.scatter(x_train, y_train, color='blue', label='Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Network Architecture ({layers} layers, {neurons} neurons)')
    plt.legend()
    plt.grid(True)
    plt.show()

    x_test = np.linspace(0.1, 5, 50)
    y_test = linear_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.21619276702404022
Test Loss (MSE): 0.0075163464061915874
Test Loss (MSE): 0.04090062156319618
Test Loss (MSE): 0.02696695737540722
Test Loss (MSE): 0.057555828243494034
# Generate training data once, as above, this time from the sinusoidal function
np.random.seed(0)
x_train = np.random.uniform(-5, 5, 100)
y_train = sinusoidal_function(x_train)

# Define a function to create and train models with different architectures
def train_model(num_layers, num_neurons):
    # Build and train the model
    model = keras.Sequential([keras.Input(shape=(1,))])
    for _ in range(num_layers):
        model.add(keras.layers.Dense(num_neurons, activation='relu'))
    model.add(keras.layers.Dense(1))
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    model.fit(x_train, y_train, epochs=100, verbose=0)
    return model

# Test different architectures
architectures = [(1, 64), (2, 32), (3, 20), (3, 16), (4, 8)]

for layers, neurons in architectures:
    model = train_model(layers, neurons)

    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = sinusoidal_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function ({layers} layers, {neurons} neurons)', color='red', linestyle='--')
    plt.scatter(x_train, y_train, color='blue', label='Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Network Architecture ({layers} layers, {neurons} neurons)')
    plt.legend()
    plt.grid(True)
    plt.show()

    x_test = np.linspace(0.1, 5, 50)
    y_test = sinusoidal_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.09151393920183182
Test Loss (MSE): 0.016277417540550232
Test Loss (MSE): 0.005629471968859434
Test Loss (MSE): 0.01031306479126215
Test Loss (MSE): 0.005127677693963051
# Generate training data once, as above, this time from the complicated function
np.random.seed(0)
x_train = np.random.uniform(-5, 5, 100)
y_train = complicated_function(x_train)

# Define a function to create and train models with different architectures
def train_model(num_layers, num_neurons):
    # Build and train the model
    model = keras.Sequential([keras.Input(shape=(1,))])
    for _ in range(num_layers):
        model.add(keras.layers.Dense(num_neurons, activation='relu'))
    model.add(keras.layers.Dense(1))
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    model.fit(x_train, y_train, epochs=100, verbose=0)
    return model

# Test different architectures
architectures = [(1, 64), (2, 32), (3, 20), (3, 16), (4, 8)]

for layers, neurons in architectures:
    model = train_model(layers, neurons)

    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = complicated_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function ({layers} layers, {neurons} neurons)', color='red', linestyle='--')
    plt.scatter(x_train, y_train, color='blue', label='Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Network Architecture ({layers} layers, {neurons} neurons)')
    plt.legend()
    plt.grid(True)
    plt.show()

    x_test = np.linspace(0.1, 5, 50)
    y_test = complicated_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.021739527583122253
Test Loss (MSE): 0.016540419310331345
Test Loss (MSE): 0.0073663461953401566
Test Loss (MSE): 0.005080488510429859
Test Loss (MSE): 0.013431170023977757
Number of Layers and Neurons: Deeper networks (more layers) or wider networks (more neurons per layer) can potentially capture more complex patterns in the data but may also lead to overfitting if not properly regularized.
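A common, lightweight guard against such overfitting is early stopping on a held-out validation split — a minimal sketch using keras.callbacks.EarlyStopping (the patience value and epoch budget are illustrative choices):
early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss',         # watch validation loss
    patience=10,                # stop after 10 epochs without improvement
    restore_best_weights=True   # roll back to the best epoch seen
)
history = model.fit(x_train, y_train, epochs=500,
                    validation_split=0.2, callbacks=[early_stop], verbose=0)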
In this phase, we'll add various levels of noise to the data points generated in the previous section and observe how the neural network model handles this noise. Our goal is to report the network's performance across different levels of noise, ranging from minimal to substantial, and analyze its ability to maintain accuracy in the presence of noise.
First, we will add different levels of noise to the training data points. We will use a normal distribution to generate the noise, ranging from values close to zero (low noise) to larger values (high noise).
# Regenerate clean sinusoidal training data, so that the evaluation against
# sinusoidal_function later in this section is consistent with training
np.random.seed(0)
x_train = np.random.uniform(-5, 5, 100)
y_train = sinusoidal_function(x_train)

# Define a function to add noise to data
def add_noise(data, noise_level):
    noisy_data = data + np.random.normal(0, noise_level, size=data.shape)
    return noisy_data

# Generate noisy training data
noise_levels = [0.1, 0.5, 1.0]  # Different noise levels
x_noisy_train_list = []
for noise_level in noise_levels:
    x_noisy_train = add_noise(x_train, noise_level)
    x_noisy_train_list.append(x_noisy_train)

# Visualize training data with noise
plt.figure(figsize=(12, 8))
for i, x_noisy_train in enumerate(x_noisy_train_list):
    plt.scatter(x_noisy_train, y_train, label=f'Noise Level={noise_levels[i]}')
plt.xlabel('x (Noisy)')
plt.ylabel('y')
plt.title('Noisy Training Data')
plt.legend()
plt.grid(True)
plt.show()
Next, we will train neural network models using the noisy training data and examine their performance.
# Train a model on each noisy training set
models = []
histories = []
for x_noisy_train in x_noisy_train_list:
    model = keras.Sequential([
        keras.Input(shape=(1,)),  # Input object avoids the input_shape deprecation warning
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    history = model.fit(x_noisy_train, y_train, epochs=100, verbose=0)
    models.append(model)
    histories.append(history)
# Plot training loss for each noise level
plt.figure(figsize=(12, 8))
for i, history in enumerate(histories):
    plt.plot(history.history['loss'], label=f'Noise Level={noise_levels[i]}')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss with Noisy Training Data')
plt.legend()
plt.grid(True)
plt.show()
# Define a function to evaluate model performance on test data
def evaluate_model(model, x_test, y_test):
    loss, _ = model.evaluate(x_test, y_test)
    return loss

# Generate clean test data
x_test = np.linspace(-5, 5, 100)
y_test = sinusoidal_function(x_test)

# Evaluate models with clean test data
test_losses = []
for model in models:
    test_losses.append(evaluate_model(model, x_test, y_test))
# Plot test loss vs. noise level
plt.figure(figsize=(8, 6))
plt.plot(noise_levels, test_losses, marker='o')
plt.xlabel('Noise Level')
plt.ylabel('Test Loss')
plt.title('Effect of Noise on Test Loss')
plt.grid(True)
plt.show()
loss: 3.1949 - mae: 1.4338
loss: 2.8063 - mae: 1.2843
loss: 2.2500 - mae: 1.1343
# Generate training data
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
y_train = sinusoidal_function(x_train)
# Function to add noise to data points
def add_noise(data, noise_level):
    return data + np.random.normal(0, noise_level, len(data))
# Test different noise levels
noise_levels = [0.0, 0.02, 0.11, 0.21, 0.45, 0.89]
for noise_level in noise_levels:
    # Add noise to both inputs and targets
    x_noisy = add_noise(x_train, noise_level)
    y_noisy = add_noise(y_train, noise_level)
    # Build and train the model with noisy data
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(100, activation='relu'),
        keras.layers.Dense(100, activation='relu'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    history = model.fit(x_noisy, y_noisy, epochs=100, verbose=0)
    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = sinusoidal_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function (Noise Level: {noise_level})', color='red', linestyle='--')
    plt.scatter(x_noisy, y_noisy, color='blue', label='Noisy Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Noise on Network Performance (Noise Level: {noise_level})')
    plt.legend()
    plt.grid(True)
    plt.show()
    # Evaluate on a clean test grid
    x_test = np.linspace(0.1, 5, 50)
    y_test = sinusoidal_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.00405350374057889
Test Loss (MSE): 0.009962653741240501
Test Loss (MSE): 0.013466348871588707
Test Loss (MSE): 0.009529246017336845
Test Loss (MSE): 0.018830308690667152
Test Loss (MSE): 0.0875338762998581
# Generate training data
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
y_train = linear_function(x_train)
# Function to add noise to data points
def add_noise(data, noise_level):
    return data + np.random.normal(0, noise_level, len(data))
# Test different noise levels
noise_levels = [0.0, 0.02, 0.11, 0.21, 0.45, 0.89]
for noise_level in noise_levels:
    # Add noise to both inputs and targets
    x_noisy = add_noise(x_train, noise_level)
    y_noisy = add_noise(y_train, noise_level)
    # Build and train the model with noisy data
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    history = model.fit(x_noisy, y_noisy, epochs=100, verbose=0)
    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = linear_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function (Noise Level: {noise_level})', color='red', linestyle='--')
    plt.scatter(x_noisy, y_noisy, color='blue', label='Noisy Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Noise on Network Performance (Noise Level: {noise_level})')
    plt.legend()
    plt.grid(True)
    plt.show()
    # Evaluate on a clean test grid
    x_test = np.linspace(0.1, 5, 50)
    y_test = linear_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.0005923701683059335
Test Loss (MSE): 0.0030749463476240635
Test Loss (MSE): 0.020563771948218346
Test Loss (MSE): 0.008553863503038883
Test Loss (MSE): 0.03013506904244423
Test Loss (MSE): 0.30540573596954346
# Generate training data
np.random.seed(0)
num_points = 100
x_train = np.random.uniform(-5, 5, num_points)
y_train = complicated_function(x_train)
# Function to add noise to data points
def add_noise(data, noise_level):
    return data + np.random.normal(0, noise_level, len(data))
# Test different noise levels
noise_levels = [0.0, 0.02, 0.11, 0.21, 0.45, 0.89]
for noise_level in noise_levels:
    # Add noise to both inputs and targets
    x_noisy = add_noise(x_train, noise_level)
    y_noisy = add_noise(y_train, noise_level)
    # Build and train the model with noisy data
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    history = model.fit(x_noisy, y_noisy, epochs=100, verbose=0)
    # Evaluate and plot results
    x_range = np.linspace(-5, 5, 100)
    y_actual = complicated_function(x_range)
    y_predicted = model.predict(x_range)
    plt.figure(figsize=(8, 6))
    plt.plot(x_range, y_actual, label='Actual Function', color='blue')
    plt.plot(x_range, y_predicted, label=f'Predicted Function (Noise Level: {noise_level})', color='red', linestyle='--')
    plt.scatter(x_noisy, y_noisy, color='blue', label='Noisy Data Points')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title(f'Effect of Noise on Network Performance (Noise Level: {noise_level})')
    plt.legend()
    plt.grid(True)
    plt.show()
    # Evaluate on a clean test grid
    x_test = np.linspace(0.1, 5, 50)
    y_test = complicated_function(x_test)
    test_loss, test_mae = model.evaluate(x_test, y_test, verbose=0)
    print(f'Test Loss (MSE): {test_loss}')
Test Loss (MSE): 0.00980453472584486
Test Loss (MSE): 0.018481647595763206
Test Loss (MSE): 0.01087325531989336
Test Loss (MSE): 0.019688041880726814
Test Loss (MSE): 0.023345937952399254
Test Loss (MSE): 0.12324929982423782
image_path = 'Screen.jpg'
img = Image.open(image_path)
plt.imshow(img)
plt.axis('off')
plt.show()
csv_file_path = 'data.csv'
dataFrame = pd.read_csv(csv_file_path)
print("show data frame Information")
print(dataFrame.head())
print(dataFrame.columns.tolist())
print(dataFrame.describe())
DataFrame overview:
x y z
0 31 322.600006 a
1 40 335.600006 a
2 52 353.600006 a
3 63 366.600006 a
4 71 375.600006 a
['x', 'y', 'z']
x y
count 1087.000000 1087.000000
mean 353.307268 316.775719
std 200.069894 71.809046
min 31.000000 124.600006
25% 167.500000 287.100006
50% 363.000000 331.600006
75% 528.500000 364.100006
max 695.000000 408.600006
df = dataFrame.copy()
df = df[df['z'] != 'a']
X = df['x']
Y = df['y']
print(df.describe())
                x           y
count  616.000000  616.000000
mean   339.581169  359.520461
std    197.863071   30.714820
min     31.000000  311.600006
25%    159.750000  330.600006
50%    339.500000  354.600006
75%    507.250000  390.600006
max    682.000000  408.600006
The code below trains a deep Sequential network with 18 hidden layers of 100 neurons each on the extracted input data X and target data Y for 1000 epochs, using the Adam optimizer and mean squared error (MSE) loss; training runs silently (verbose=0).
After training, the model's predictions are generated over a range of input values (x_range) and plotted against the observed data points, which helps in assessing how well the model captures the underlying function.
Overall, the model demonstrates strong predictive performance, with an R-squared value of about 0.92: roughly 92% of the variance in the target variable is explained by the input.
# Deep MLP: 18 hidden layers of 100 neurons each
model = keras.Sequential()
model.add(keras.Input(shape=(1,)))
for _ in range(18):
    model.add(keras.layers.Dense(100, activation='relu'))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
history = model.fit(X, Y, epochs=1000, verbose=0)
# Evaluate the fit at the observed data points
x_range = np.linspace(X.min(), X.max(), 616)
y_predicted = model.predict(x_range)  # smooth curve for plotting
y_pred = model.predict(X)             # predictions at the observed x values
mse = mean_squared_error(Y, y_pred)
r2 = r2_score(Y, y_pred)
print(f'Test Mean Squared Error (MSE): {mse}')
print(f'Test R-squared (R2): {r2}')
# Express R-squared as a percentage match
percentage_match = r2 * 100
print(f'Percentage Match (R-squared): {percentage_match:.2f}%')
plt.figure(figsize=(8, 6))
plt.scatter(X, Y, label='Actual Data', color='blue', s=10)
plt.plot(x_range, y_predicted, label='Predicted Function', color='red', linestyle='--')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Effect of Function Complexity')
plt.legend()
plt.grid(True)
plt.show()
Test Mean Squared Error (MSE): 71.46253749214614
Test R-squared (R2): 0.9241268559226037
Percentage Match (R-squared): 92.41%
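For reference, the R-squared statistic can be computed directly from the residuals; here is a minimal NumPy sketch of the same quantity that scikit-learn's r2_score returns:

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot: the fraction of variance explained by the model
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot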
In this phase of the project, we will use a neural network for image classification using the Fashion-MNIST dataset. The Fashion-MNIST dataset consists of grayscale images of fashion items categorized into 10 classes, such as shirts, shoes, bags, etc. Each image has dimensions of 28x28 pixels.
Steps: load and preprocess the data, build a CNN classifier, train it, and evaluate the results.
# Load the Fashion-MNIST dataset
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Normalize the pixel values to be between 0 and 1
train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0
# Reshape the images to have a single channel (grayscale)
train_images = np.expand_dims(train_images, axis=-1)
test_images = np.expand_dims(test_images, axis=-1)
# Print the shapes of train and test datasets
print("Train Images Shape:", train_images.shape)
print("Train Labels Shape:", train_labels.shape)
print("Test Images Shape:", test_images.shape)
print("Test Labels Shape:", test_labels.shape)
Train Images Shape: (60000, 28, 28, 1)
Train Labels Shape: (60000,)
Test Images Shape: (10000, 28, 28, 1)
Test Labels Shape: (10000,)
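Before building a classifier, it can help to eyeball a few samples. A minimal sketch that shows a 4x4 grid of training images with their numeric labels (the grid size is an arbitrary choice):

# Preview a 4x4 grid of training images with their raw integer labels
plt.figure(figsize=(6, 6))
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(train_images[i].reshape(28, 28), cmap='binary')
    plt.title(f'label: {train_labels[i]}', fontsize=8)
    plt.axis('off')
plt.tight_layout()
plt.show()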
Now, build a neural network model for image classification. We'll use a simple Convolutional Neural Network (CNN) architecture for this task.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),  # Input object avoids the input_shape deprecation warning
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10)  # Output layer: one logit per class
])
# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# Display the model summary
model.summary()
Model: "sequential_46"
Layer (type)                     Output Shape          Param #
conv2d (Conv2D)                  (None, 26, 26, 32)    320
max_pooling2d (MaxPooling2D)     (None, 13, 13, 32)    0
conv2d_1 (Conv2D)                (None, 11, 11, 64)    18,496
max_pooling2d_1 (MaxPooling2D)   (None, 5, 5, 64)      0
conv2d_2 (Conv2D)                (None, 3, 3, 64)      36,928
flatten (Flatten)                (None, 576)           0
dense_163 (Dense)                (None, 64)            36,928
dense_164 (Dense)                (None, 10)            650
Total params: 93,322 (364.54 KB)
Trainable params: 93,322 (364.54 KB)
Non-trainable params: 0 (0.00 B)
Train the neural network model using the preprocessed training images and labels.
# Define the number of epochs and batch size
epochs = 10
batch_size = 32
# Train the model, holding out 10% of the training data for validation
history = model.fit(train_images, train_labels, epochs=epochs, batch_size=batch_size,
                    validation_split=0.1)
Epoch 1/10: accuracy: 0.7312 - loss: 0.7276 - val_accuracy: 0.8538 - val_loss: 0.4010
Epoch 2/10: accuracy: 0.8740 - loss: 0.3487 - val_accuracy: 0.8823 - val_loss: 0.3125
Epoch 3/10: accuracy: 0.8962 - loss: 0.2877 - val_accuracy: 0.8928 - val_loss: 0.2907
Epoch 4/10: accuracy: 0.9052 - loss: 0.2584 - val_accuracy: 0.8963 - val_loss: 0.2761
Epoch 5/10: accuracy: 0.9154 - loss: 0.2260 - val_accuracy: 0.9035 - val_loss: 0.2657
Epoch 6/10: accuracy: 0.9246 - loss: 0.2055 - val_accuracy: 0.9087 - val_loss: 0.2513
Epoch 7/10: accuracy: 0.9316 - loss: 0.1870 - val_accuracy: 0.8958 - val_loss: 0.2835
Epoch 8/10: accuracy: 0.9372 - loss: 0.1706 - val_accuracy: 0.9108 - val_loss: 0.2600
Epoch 9/10: accuracy: 0.9440 - loss: 0.1518 - val_accuracy: 0.9120 - val_loss: 0.2616
Epoch 10/10: accuracy: 0.9473 - loss: 0.1391 - val_accuracy: 0.9137 - val_loss: 0.2651
Evaluate the trained model on the test dataset to measure its performance.
# Evaluate the model on the test dataset
test_loss, test_accuracy = model.evaluate(test_images, test_labels, verbose=2)
print("\nTest Accuracy:", test_accuracy)
313/313 - 0s - 2ms/step - accuracy: 0.9074 - loss: 0.2897
Test Accuracy: 0.9074000120162964
# Class labels for the Fashion-MNIST dataset (official label order)
FASHION_LABELS = {
    0: 'T-shirt/top',
    1: 'Trouser',
    2: 'Pullover',
    3: 'Dress',
    4: 'Coat',
    5: 'Sandal',
    6: 'Shirt',
    7: 'Sneaker',
    8: 'Bag',
    9: 'Ankle boot'
}
# Get predictions on test images
predictions = model.predict(test_images)
# Display a grid of actual vs. predicted labels for a subset of test images
plt.figure(figsize=(12, 14))
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(test_images[i].reshape(28, 28), cmap='binary')
    actual_label = FASHION_LABELS[test_labels[i]]
    predicted_label = FASHION_LABELS[np.argmax(predictions[i])]
    plt.title(f"Actual: {actual_label}\nPredicted: {predicted_label}")
    plt.axis('off')
plt.tight_layout()
plt.show()
# Plot training history (accuracy and loss)
plt.figure(figsize=(8, 6))
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Accuracy/Loss')
plt.title('Training and Validation Metrics')
plt.legend()
plt.show()
# Get predicted classes and true classes
predicted_classes = np.argmax(predictions, axis=1)
true_classes = test_labels
# Compute confusion matrix
conf_matrix = confusion_matrix(true_classes, predicted_classes)
# Plot confusion matrix
plt.figure(figsize=(10, 8))
sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Reds', cbar=False)
plt.xlabel('Predicted Labels')
plt.ylabel('True Labels')
plt.title('Confusion Matrix')
plt.show()
# Calculate and print classification report
accuracy = accuracy_score(true_classes, predicted_classes)
precision = precision_score(true_classes, predicted_classes, average='weighted')
recall = recall_score(true_classes, predicted_classes, average='weighted')
f1 = f1_score(true_classes, predicted_classes, average='weighted')
print(f'Accuracy: {accuracy:.4f}')
print(f'Precision: {precision:.4f}')
print(f'Recall: {recall:.4f}')
print(f'F1-Score: {f1:.4f}')
Accuracy: 0.9074
Precision: 0.9089
Recall: 0.9074
F1-Score: 0.9078
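The weighted averages above can hide weaker classes (Shirt is typically the hardest Fashion-MNIST category). A minimal sketch of a per-class breakdown using scikit-learn's classification_report:

from sklearn.metrics import classification_report

# Per-class precision, recall, and F1, keyed by readable class names
target_names = [FASHION_LABELS[i] for i in range(10)]
print(classification_report(true_classes, predicted_classes, target_names=target_names))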
In this final phase of the project, we aim to leverage neural networks for denoising tasks, addressing real-world scenarios where datasets often contain noise or incomplete information.
First, we'll load the Fashion-MNIST dataset and create noisy versions of the input images.
# Load Fashion-MNIST dataset
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Normalize pixel values to be between 0 and 1 and add a channel axis,
# since the convolutional autoencoder below expects (28, 28, 1) inputs
train_images = np.expand_dims(train_images / 255.0, axis=-1)
test_images = np.expand_dims(test_images / 255.0, axis=-1)
# Function to add Gaussian noise to images, clipped back to the valid [0, 1] range
def add_gaussian_noise(images, mean=0, std=0.1):
    noise = np.random.normal(mean, std, images.shape)
    return np.clip(images + noise, 0, 1)
# Create noisy versions of training and testing images
train_images_noisy = add_gaussian_noise(train_images)
test_images_noisy = add_gaussian_noise(test_images)
Next, define clean (X_clean) and noisy (X_noisy) datasets for training and testing the denoising autoencoder.
# Define clean and noisy datasets
X_clean_train, X_noisy_train = train_images, train_images_noisy
X_clean_test, X_noisy_test = test_images, test_images_noisy
Now, let's construct an autoencoder model using TensorFlow/Keras for denoising the images.
# Define the autoencoder architecture
def build_autoencoder():
    input_img = Input(shape=(28, 28, 1))
    # Encoder: two conv + pool stages, 28x28 -> 7x7
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)
    # Decoder: mirror the encoder with upsampling, 7x7 -> 28x28
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    autoencoder = Model(input_img, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

# Create the autoencoder model
autoencoder = build_autoencoder()
# Display the autoencoder architecture
autoencoder.summary()
# Train the autoencoder to map noisy inputs to clean targets
autoencoder.fit(X_noisy_train, X_clean_train,
                epochs=20,
                batch_size=128,
                shuffle=True,
                validation_data=(X_noisy_test, X_clean_test))
Model: "functional_87"
Layer (type)                     Output Shape          Param #
input_layer_47 (InputLayer)      (None, 28, 28, 1)     0
conv2d_3 (Conv2D)                (None, 28, 28, 32)    320
max_pooling2d_2 (MaxPooling2D)   (None, 14, 14, 32)    0
conv2d_4 (Conv2D)                (None, 14, 14, 64)    18,496
max_pooling2d_3 (MaxPooling2D)   (None, 7, 7, 64)      0
conv2d_5 (Conv2D)                (None, 7, 7, 64)      36,928
up_sampling2d (UpSampling2D)     (None, 14, 14, 64)    0
conv2d_6 (Conv2D)                (None, 14, 14, 32)    18,464
up_sampling2d_1 (UpSampling2D)   (None, 28, 28, 32)    0
conv2d_7 (Conv2D)                (None, 28, 28, 1)     289
Total params: 74,497 (291.00 KB)
Trainable params: 74,497 (291.00 KB)
Non-trainable params: 0 (0.00 B)
Training loss decreased steadily from 0.0350 (epoch 1) to 0.0036 (epoch 20), with validation loss falling from 0.0108 to 0.0037.
<keras.src.callbacks.history.History at 0x24de83ea690>
After training the denoising autoencoder, evaluate its performance on the test set and analyze the results.
# Use the trained autoencoder to denoise test images
denoised_images = autoencoder.predict(X_noisy_test)
# Display original, noisy, and denoised images
n = 10  # Number of images to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Original image
    ax = plt.subplot(3, n, i + 1)
    plt.imshow(X_clean_test[i].reshape(28, 28), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax.set_title('Original')
    # Noisy image
    ax = plt.subplot(3, n, i + 1 + n)
    plt.imshow(X_noisy_test[i].reshape(28, 28), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax.set_title('Noisy')
    # Denoised image
    ax = plt.subplot(3, n, i + 1 + 2 * n)
    plt.imshow(denoised_images[i].reshape(28, 28), cmap='gray')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax.set_title('Denoised')
plt.tight_layout()
plt.show()
Finally, we can vary the noise level and assess denoising performance across experiments. The cell below reports the MSE at the default noise level (std = 0.1); a sketch that sweeps several levels follows.
# Evaluate denoising performance using Mean Squared Error (MSE);
# both arrays have shape (10000, 28, 28, 1), so no reshape is needed
mse = np.mean(np.square(X_clean_test - denoised_images))
print(f"Mean Squared Error (MSE) for Denoised Images: {mse}")
Mean Squared Error (MSE) for Denoised Images: 0.003733864536139465
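A minimal sketch of that sweep, reusing the trained autoencoder on freshly noised copies of the test set; the particular levels [0.1, 0.3, 0.5] are illustrative assumptions:

# Sweep noise levels and measure reconstruction MSE at each one
for std in [0.1, 0.3, 0.5]:
    noisy = add_gaussian_noise(X_clean_test, std=std)
    denoised = autoencoder.predict(noisy, verbose=0)
    mse = np.mean(np.square(X_clean_test - denoised))
    print(f'noise std={std}: denoised MSE={mse:.5f}')

Since the autoencoder was trained only on std = 0.1 noise, rising MSE at heavier noise levels indicates how far its learned denoising generalizes.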
The denoising results are presented visually for a sample of test images. By comparing the reconstructions and the corresponding MSE values across noise levels, we can analyze how effectively the autoencoder removes noise of different intensities.
The experiment provides insight into how the autoencoder performs under varying noise conditions and helps assess its robustness, including to noise levels it was never trained on.